Statistical Modeling of Cell Assemblies Activities in Associative Cortex of Behaving Monkeys
So far there has been no general method for relating extracellularly measured electrophysiological activity of neurons in the associative cortex to underlying network or "cognitive" states. We propose to model such data using a multivariate Poisson hidden Markov model. We demonstrate the application of this approach to temporal segmentation of the firing patterns and to characterization of the cortical responses to external stimuli. Using such a statistical model we can significantly discriminate between two behavioral modes of the monkey and characterize them by their different firing patterns, as well as by the level of coherence of their multi-unit firing activity. Our study utilized measurements carried out on behaving Rhesus monkeys by M. Abeles, E. Vaadia, and H. Bergman of the Hadassah Medical School of the Hebrew University.
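The core computation in such a model can be illustrated with a short sketch. The following is a minimal, hypothetical implementation (not the authors' code) of the forward-algorithm log-likelihood for a multivariate Poisson HMM, assuming spike counts that are conditionally independent across units given the hidden state; all rates, matrices, and sizes are made-up toy values.

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import logsumexp

def forward_loglik(counts, log_A, log_pi, rates):
    """Log-likelihood of spike-count data under a multivariate Poisson HMM.

    counts : (T, N) spike counts for N units in T time bins
    log_A  : (K, K) log state-transition matrix
    log_pi : (K,)   log initial state distribution
    rates  : (K, N) Poisson firing rate of each unit in each hidden state
    """
    # Units are conditionally independent given the hidden state, so the
    # emission log-probability is a sum over units.
    log_b = poisson.logpmf(counts[:, None, :], rates[None, :, :]).sum(axis=2)  # (T, K)
    log_alpha = log_pi + log_b[0]
    for t in range(1, counts.shape[0]):
        log_alpha = log_b[t] + logsumexp(log_alpha[:, None] + log_A, axis=0)
    return logsumexp(log_alpha)

# Toy example: 2 hidden "network states", 3 units, 50 time bins.
rng = np.random.default_rng(0)
A = np.array([[0.95, 0.05], [0.10, 0.90]])
pi = np.array([0.5, 0.5])
rates = np.array([[2.0, 8.0, 1.0],    # state-dependent firing rates
                  [6.0, 1.0, 4.0]])
counts = rng.poisson(rates[0], size=(50, 3))  # counts drawn from state 0's rates
ll = forward_loglik(counts, np.log(A), np.log(pi), rates)
```

Fitting such a model (e.g. by EM) then yields a temporal segmentation of the firing patterns via the inferred hidden-state sequence.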
Statistical Modeling of Images with Fields of Gaussian Scale Mixtures
The local statistical properties of photographic images, when represented in a multi-scale basis, have been described using Gaussian scale mixtures (GSMs). Here, we use this local description to construct a global field of Gaussian scale mixtures (FoGSM). We show that parameter estimation for FoGSM is feasible, and that samples drawn from an estimated FoGSM model have marginal and joint statistics similar to wavelet coefficients of photographic images. We develop an algorithm for image denoising based on the FoGSM model, and demonstrate substantial improvements over the current state-of-the-art denoising method based on the local GSM model. Many successful methods in image processing and computer vision rely on statistical models for images, and it is thus of continuing interest to develop improved models, both in terms of their ability to precisely capture image structures, and in terms of their tractability when used in applications.
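As a rough illustration of the local GSM idea (not the FoGSM model itself), the sketch below draws samples x = sqrt(z)·u with a random positive multiplier z and a Gaussian u, and checks that the marginal is heavy-tailed relative to a Gaussian, as wavelet coefficients of photographic images are. The lognormal multiplier and its parameters are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Gaussian scale mixture: x = sqrt(z) * u, with u ~ N(0, 1) and a random
# positive multiplier z (a lognormal is one common, illustrative choice).
z = rng.lognormal(mean=0.0, sigma=0.5, size=n)
u = rng.standard_normal(n)
x = np.sqrt(z) * u

def excess_kurtosis(v):
    """Sample excess kurtosis (0 for a Gaussian)."""
    v = v - v.mean()
    return (v ** 4).mean() / (v ** 2).mean() ** 2 - 3.0
```

The mixture is sharply peaked and heavy-tailed relative to a Gaussian, the hallmark of wavelet-coefficient marginals that GSMs were introduced to capture; FoGSM extends this by modeling the multiplier field z globally across space and scale.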
Statistical Modeling in Machine Learning - 1st Edition
Tilottama Goswami received a BE degree with Honors in Computer Science and Engineering from the National Institute of Technology, Durgapur, and an MS degree in Computer Science (High Distinction) from Rivier University, Nashua, New Hampshire, United States. She was awarded a PhD in Computer Science by the University of Hyderabad. Presently, Dr. Goswami is a Professor in the Department of Information Technology, Vasavi College of Engineering, Hyderabad, India. She has 23 years of combined experience in academia, research, and the IT industry. Her research interests are computer vision, machine learning, and image processing.
Time Series Analysis (1)
A time series is a sequence of data points arranged at uniform intervals of time. When the index set T of observation times t is discrete, we speak of a discrete time series; when t varies continuously, we have a continuous time series. In general, time series are autocorrelated: data from the past affect the present and the future, meaning the covariance between observations at different times is not zero.
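The autocovariance claim can be checked directly. The sketch below (illustrative, using a made-up AR(1) process x_t = 0.8·x_{t-1} + e_t) estimates sample autocovariances and shows that the lagged covariance is far from zero for the autocorrelated series but near zero for white noise.

```python
import numpy as np

def autocov(x, k):
    """Sample autocovariance of series x at lag k."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    return (xc * xc).mean() if k == 0 else (xc[:-k] * xc[k:]).mean()

rng = np.random.default_rng(0)
n = 20_000
noise = rng.standard_normal(n)          # white noise: no memory

# AR(1) process x_t = 0.8 * x_{t-1} + e_t: the past carries into the
# present, so lagged covariances are far from zero.
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + noise[t]
```

For the AR(1) series the lag-1 autocorrelation autocov(x, 1) / autocov(x, 0) is close to the coefficient 0.8, while for the white-noise series it is close to zero.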
Similarities and Differences between Machine Learning and Traditional Advanced Statistical Modeling in Healthcare Analytics
Bennett, Michele, Hayes, Karin, Kleczyk, Ewa J., Mehta, Rajesh
Data scientists and statisticians are often at odds when determining the best approach, machine learning or statistical modeling, to solve an analytics challenge. However, machine learning and statistical modeling are more cousins than adversaries on different sides of an analysis battleground. Choosing between the two approaches or in some cases using both is based on the problem to be solved and outcomes required as well as the data available for use and circumstances of the analysis. Machine learning and statistical modeling are complementary, based on similar mathematical principles, but simply using different tools in an overall analytics knowledge base. Determining the predominant approach should be based on the problem to be solved as well as empirical evidence, such as size and completeness of the data, number of variables, assumptions or lack thereof, and expected outcomes such as predictions or causality. Good analysts and data scientists should be well versed in both techniques and their proper application, thereby using the right tool for the right project to achieve the desired results.
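One way to see the contrast is on a toy dataset: a parametric statistical model yields interpretable coefficients (an answer to "how does y change with x?"), while a machine-learning-style predictor yields only predictions (an answer to "what is y likely to be here?"). The sketch below is illustrative only; the data, the k-nearest-neighbor predictor, and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
x = rng.uniform(0, 10, n)
y = 2.0 * x + 3.0 + rng.standard_normal(n)   # true relationship: y = 2x + 3 + noise

# Statistical modeling flavor: a parametric fit whose coefficients carry
# the substantive answer (slope ~ 2, intercept ~ 3).
slope, intercept = np.polyfit(x, y, 1)

# Machine learning flavor: k-nearest-neighbor regression, which produces
# predictions without interpretable coefficients.
def knn_predict(x0, k=25):
    """Predict y at x0 by averaging the k nearest training points."""
    idx = np.argsort(np.abs(x - x0))[:k]
    return y[idx].mean()
```

Both approaches predict well here; the parametric model additionally supports the kinds of inferential questions (effect sizes, assumptions, causality) the abstract contrasts with pure prediction.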
Bayesian hierarchical stacking: Some models are (somewhere) useful « Statistical Modeling, Causal Inference, and Social Science
Stacking is a widely used model averaging technique that asymptotically yields optimal predictions among linear averages. We show that stacking is most effective when model predictive performance is heterogeneous in inputs, and that we can further improve the stacked mixture with a hierarchical model. We generalize stacking to Bayesian hierarchical stacking: the model weights vary as a function of the data, are partially pooled, and are inferred using Bayesian inference. We further incorporate discrete and continuous inputs, other structured priors, and time series and longitudinal data.
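Classical (non-hierarchical) stacking, which this work generalizes, can be sketched in a few lines: choose simplex weights minimizing held-out error of the weighted average of model predictions. The example below is a toy illustration with two invented unbiased predictors whose errors are independent; real stacking uses cross-validated predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
y = rng.standard_normal(n)                   # held-out targets
pred_a = y + 0.5 * rng.standard_normal(n)    # two unbiased models with
pred_b = y + 0.5 * rng.standard_normal(n)    # independent errors

# Classical stacking: grid-search the simplex weight w minimizing the
# held-out mean squared error of the weighted average.
ws = np.linspace(0.0, 1.0, 101)
mses = [np.mean((w * pred_a + (1 - w) * pred_b - y) ** 2) for w in ws]
w_star = ws[int(np.argmin(mses))]
```

With independent, equally noisy predictors the optimal weight sits near 0.5 and the stacked mixture roughly halves the error of either model alone. Bayesian hierarchical stacking replaces the single w with a partially pooled function of the inputs, which pays off exactly when model performance is heterogeneous in the inputs.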
In defense of statistical modeling
Data science has been hot for many years now, attracting attention and talent. There is a persistent thread of commentary, though, that says data science's core skill of statistical modeling is overhyped and that managers and aspiring data scientists should focus on engineering instead. Vicki Boykis' 2019 blog post was the first article I remember along these lines: "Don't do a degree in data science, don't do a bootcamp… It's much easier to come into a data science and tech career through the 'back door', i.e. starting out as a junior developer, or in DevOps, project management, and, perhaps most relevant, as a data analyst, information manager, or similar… While tuning models, visualization, and analysis make up some component of your time as a data scientist, data science is and has always been primarily about getting clean data in a single place to be used for interpolation." More recently, Gartner's 2020 AI hype cycle report acknowledges the role of data scientists but says: "Gartner foresees developers being the major force in AI."
Revisiting Rashomon: A Comment on "The Two Cultures"
Here, I provide some reflections on Prof. Leo Breiman's "The Two Cultures" paper. I focus specifically on the phenomenon that Breiman dubbed the "Rashomon Effect", describing the situation in which there are many models that satisfy predictive accuracy criteria equally well, but process information in the data in substantially different ways. This phenomenon can make it difficult to draw conclusions or automate decisions based on a model fit to data. I make connections to recent work in the Machine Learning literature that explores the implications of this issue, and note that grappling with it can be a fruitful area of collaboration between the algorithmic and data modeling cultures.
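A toy illustration of the Rashomon Effect, with invented data: when two features are nearly collinear, a model built on either one fits essentially equally well, yet the two models "explain" the outcome through different variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)   # nearly collinear copy of x1
y = x1 + 0.5 * rng.standard_normal(n)     # outcome actually driven by x1

def ols(X, y):
    """Least-squares fit; returns coefficients and training MSE."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, float(np.mean((X @ beta - y) ** 2))

beta_a, mse_a = ols(x1[:, None], y)   # model A: "the signal is x1"
beta_b, mse_b = ols(x2[:, None], y)   # model B: "the signal is x2"
```

Both models achieve essentially identical accuracy while attributing the effect to different variables, so predictive performance alone cannot tell us which explanation to trust.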
Comments on Leo Breiman's paper 'Statistical Modeling: The Two Cultures' (Statistical Science, 2001, 16(3), 199-231)
Breiman challenged statisticians to think more broadly and to step into the unknown, model-free learning world, with him paving the way forward. The statistics community responded with slight optimism, some skepticism, and plenty of disbelief. Today, we are at the same crossroads anew. Faced with the enormous practical success of model-free, deep, and machine learning, we are naturally inclined to think that everything is resolved. A new frontier has emerged: one where the role, impact, or stability of the learning algorithms is no longer measured by prediction quality alone but by an inferential one, where the questions of "why" and "if" can no longer be safely ignored.
Common terminologies used in Machine Learning and Artificial Intelligence
In this article, we'll introduce you to various common terminologies used in the machine learning and artificial intelligence industry. Without any further delay, let's begin! In order to explain these terms, I'm going to use a very simple chart: on the horizontal axis I represent "the value added to an organization", while on the vertical axis I represent "the complexity of doing this practice".